Research plan and status
ML-Agent: Reinforcing LLM Agents for Autonomous Machine Learning Engineering
Zexi Liu, Jingyi Chai, Xinyu Zhu, Shuo Tang, Rui Ye, Bo Zhang, Lei Bai, Siheng Chen
The emergence of large language model (LLM)-based agents has significantly advanced the development of autonomous machine learning (ML) engineering. However, most existing approaches rely heavily on manual prompt engineering and fail to adapt and optimize based on diverse experimental experiences. To address this, we explore, for the first time, the paradigm of learning-based agentic ML, where an LLM agent learns through interactive experimentation on ML tasks using online reinforcement learning (RL). To realize this, we propose a novel agentic ML training framework with three key components: (1) exploration-enriched fine-tuning, which enables LLM agents to generate diverse actions for enhanced RL exploration; (2) step-wise RL, which enables training on a single action step, accelerating experience collection and improving training efficiency; (3) an agentic ML-specific reward module, which unifies varied ML feedback signals into consistent rewards for RL optimization. Leveraging this framework, we train ML-Agent, driven by a 7B-sized Qwen-2.5 LLM, for autonomous ML. Remarkably, despite being trained on merely 9 ML tasks, our 7B-sized ML-Agent outperforms the 671B-sized DeepSeek-R1 agent. Furthermore, it achieves continuous performance improvements and demonstrates exceptional cross-task generalization capabilities.
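The abstract's third component, the agentic ML-specific reward module, can be pictured as a function that maps heterogeneous experiment feedback (execution errors, validation metrics) onto one scalar reward for step-wise RL. The sketch below is an illustrative assumption about what such a mapping might look like; the function name `unify_reward`, the feedback keys, and the clipping rule are hypothetical, not the authors' code.

```python
# Hypothetical sketch of an "agentic ML-specific reward module": unify
# varied ML feedback signals into one scalar reward for RL optimization.
# All names and the exact mapping are illustrative assumptions.

def unify_reward(feedback: dict) -> float:
    """Map heterogeneous ML-experiment feedback to a scalar reward."""
    if feedback.get("error"):          # failed run: strong negative signal
        return -1.0
    metric = feedback.get("val_metric")
    if metric is None:                 # ran, but produced no usable metric
        return 0.0
    baseline = feedback.get("baseline", 0.0)
    # reward improvement over a per-task baseline, clipped to [-1, 1]
    return max(-1.0, min(1.0, metric - baseline))
```

Unifying rewards this way would let a single RL objective train on single action steps across tasks whose raw feedback differs in scale and type.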
- Asia > China > Shanghai > Shanghai (0.04)
- North America > United States > Oregon > Multnomah County > Portland (0.04)
- Asia > Thailand > Bangkok > Bangkok (0.04)
- Workflow (1.00)
- Research Report > New Finding (0.46)
Cybench: A Framework for Evaluating Cybersecurity Capabilities and Risk of Language Models
Andy K. Zhang, Neil Perry, Riya Dulepet, Eliot Jones, Justin W. Lin, Joey Ji, Celeste Menders, Gashon Hussein, Samantha Liu, Donovan Jasper, Pura Peetathawatchai, Ari Glenn, Vikram Sivashankar, Daniel Zamoshchin, Leo Glikbarg, Derek Askaryar, Mike Yang, Teddy Zhang, Rishi Alluri, Nathan Tran, Rinnara Sangpisit, Polycarpos Yiorkadjis, Kenny Osele, Gautham Raghupathi, Dan Boneh, Daniel E. Ho, Percy Liang
Language Model (LM) agents for cybersecurity that are capable of autonomously identifying vulnerabilities and executing exploits have the potential to cause real-world impact. Policymakers, model providers, and other researchers in the AI and cybersecurity communities are interested in quantifying the capabilities of such agents to help mitigate cyber risk and investigate opportunities for penetration testing. Toward that end, we introduce Cybench, a framework for specifying cybersecurity tasks and evaluating agents on those tasks. We include 40 professional-level Capture the Flag (CTF) tasks from 4 distinct CTF competitions, chosen to be recent, meaningful, and spanning a wide range of difficulties. Each task includes its own description and starter files, and is initialized in an environment where an agent can execute bash commands and observe outputs. Since many tasks are beyond the capabilities of existing LM agents, we introduce subtasks, which break down a task into intermediary steps for more gradated evaluation; we add subtasks for 17 of the 40 tasks. To evaluate agent capabilities, we construct a cybersecurity agent and evaluate 7 models: GPT-4o, Claude 3 Opus, Claude 3.5 Sonnet, Mixtral 8x22b Instruct, Gemini 1.5 Pro, Llama 3 70B Chat, and Llama 3.1 405B Instruct. Without guidance, we find that agents are able to solve only the easiest complete tasks, which took human teams up to 11 minutes to solve, with Claude 3.5 Sonnet and GPT-4o having the highest success rates. Finally, subtasks provide more signal for measuring performance compared to unguided runs, with models achieving a 3.2% higher success rate on complete tasks with subtask guidance than without it. All code and data are publicly available at https://cybench.github.io
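The subtask mechanism the abstract describes, breaking a CTF task into intermediary questions for more gradated scoring, can be sketched as a small scoring routine. The `Subtask` type and the fraction-correct scoring rule below are illustrative assumptions for exposition, not Cybench's actual API.

```python
# Hypothetical sketch of subtask-guided CTF scoring: a task is decomposed
# into ordered subtasks, each with a known answer (e.g. an intermediate
# flag), and an agent run is scored by the fraction it answers correctly.
from dataclasses import dataclass

@dataclass
class Subtask:
    question: str
    answer: str  # expected answer, e.g. an intermediate flag or value

def score_run(subtasks: list[Subtask], agent_answers: list[str]) -> float:
    """Return the fraction of subtasks the agent answered correctly."""
    if not subtasks:
        return 0.0
    solved = sum(
        1
        for st, ans in zip(subtasks, agent_answers)
        if ans.strip() == st.answer
    )
    return solved / len(subtasks)
```

Scoring intermediary steps this way yields a graded signal even when the agent never recovers the final flag, which is why subtask-guided runs can separate models that unguided pass/fail evaluation cannot.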
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- North America > United States > New York (0.04)
- Europe > Denmark > Capital Region > Copenhagen (0.04)
- (3 more...)
- Workflow (0.68)
- Research Report (0.50)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military > Cyberwarfare (1.00)